In this work, in order to increase the capacity of a recurrent neural network, we present a model for extracting common features and sharing them across data. As a result of using this model, the extracted principal components of the data are invariant to unwanted variations. The recurrent connections of the network remove noise using a continuous attractor formed during the training phase. The defined speaker codes are transformed into the information needed to switch the continuous attractor in the input space. As a result, speaker variations can be compensated and recognition proceeds as if a clean signal were available. We compared the performance of this method with a reference network described in the paper. The results show that the proposed model removes noise and unwanted variations more effectively, increasing the phoneme recognition accuracy by about 5% at a signal-to-noise ratio of 0 dB.
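The abstract does not specify the network architecture, so the following is only a generic, minimal sketch of the underlying idea of attractor-based denoising: clean patterns stored during training become fixed points of a recurrent update, and iterating that update pulls a corrupted input back toward the nearest stored pattern. It uses a simple Hopfield-style network with invented dimensions and noise levels, not the paper's speaker-code model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                             # pattern dimension (assumed)
patterns = np.sign(rng.standard_normal((3, n)))    # "clean" training patterns

# Hebbian weights; zero diagonal so a unit does not drive itself.
W = patterns.T @ patterns / n
np.fill_diagonal(W, 0.0)

def denoise(x, steps=20):
    """Iterate the recurrent update until the state settles on an attractor."""
    s = x.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1.0
    return s

# Corrupt one stored pattern by flipping 15% of its entries, then recover it.
noisy = patterns[0].copy()
flip = rng.choice(n, size=int(0.15 * n), replace=False)
noisy[flip] *= -1
recovered = denoise(noisy)
print("bits recovered:", int((recovered == patterns[0]).sum()), "/", n)
```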
Machine learning has been widely used over the past decades due to its wide range of applications. In most machine learning applications, such as clustering and classification, the data are high-dimensional and dimensionality reduction methods are essential. Non-negative matrix factorization reduces data dimensionality by extracting latent features from high-dimensional data. Standard non-negative matrix factorization only considers how to model each feature vector in the decomposed matrices and ignores the relationships between feature vectors, although these relationships lead to better factorizations for machine learning applications. In this paper, a new method based on non-negative matrix factorization is proposed to reduce the dimensionality of the data, which imposes constraints on each pair of feature vectors using distance-based criteria. The proposed method uses the Frobenius norm as a cost function to derive update rules. Experiments on the data sets show that the proposed multiplicative update rules converge rapidly and give better results than other algorithms.
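For reference, the sketch below shows standard Frobenius-norm NMF with the classic Lee–Seung multiplicative updates, which the proposed update rules extend; the paper's pairwise distance-based constraints are not included, and the matrix sizes, rank, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((100, 40))   # non-negative data matrix (assumed shape)
r = 10                      # reduced dimension
eps = 1e-9                  # guards against division by zero

W = rng.random((V.shape[0], r))
H = rng.random((r, V.shape[1]))

for _ in range(200):
    # Multiplicative updates that monotonically decrease ||V - WH||_F^2.
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", np.linalg.norm(V - W @ H))
```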